IS

Marcolin, Barbara L.

Topic Weight  Topic Terms
0.372         effects, effect, research, data, studies, empirical, information, literature, different, interaction, analysis, implications, findings, results, important
0.302         structural, pls, measurement, modeling, equation, research, formative, squares, partial, using, indicators, constructs, construct, statistical, models
0.204         competence, experience, versus, individual, disaster, employees, form, npd, concept, context, construct, effectively, focus, functionalities, front-end
0.149         results, study, research, experiment, experiments, influence, implications, conducted, laboratory, field, different, indicate, impact, effectiveness, future
0.124         use, question, opportunities, particular, identify, information, grammars, researchers, shown, conceptual, ontological, given, facilitate, new, little
0.114         results, study, research, information, studies, relationship, size, variables, previous, variable, examining, dependent, increases, empirical, variance
0.111         instrument, measurement, factor, analysis, measuring, measures, dimensions, validity, based, instruments, construct, measure, conceptualization, sample, reliability

Co-authorship network (shown as an interactive figure on the original page; numbers in parentheses are co-authorship or occurrence counts):

Coauthors: Compeau, Deborah R. (1); Chin, Wynne W. (1); Huff, Sid L. (1); Munro, Malcolm C. (1); Newsted, Peter R. (1)

Keywords: Competence (1); Empirical (1); End-User Computing (1); Interaction Effects (1); Measurement Error (1); Moderators (1); PLS (1); Self-Efficacy (1); Software Skills (1); Structural Equation Modeling (1); Theoretical Framework (1)

Articles (2)

A Partial Least Squares Latent Variable Modeling Approach for Measuring Interaction Effects: Results from a Monte Carlo Simulation Study and an Electronic-Mail Emotion/Adoption Study. (Information Systems Research, 2003)
Authors: Chin, Wynne W.; Marcolin, Barbara L.; Newsted, Peter R.
Abstract:
    The ability to detect and accurately estimate the strength of interaction effects is critical to social science research in general and IS research in particular. Within the IS discipline, a significant percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (cf. McKeen et al. 1994, Weill and Olson 1989). In our survey of such studies, the majority failed either to detect an interaction effect or to provide an estimate of its size. In cases where effect sizes are estimated, the numbers are generally small. These results have led some researchers to question both the usefulness of contingency theory and the need to detect interaction effects (e.g., Weill and Olson 1989). This paper addresses this issue by providing a new latent variable modeling approach that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships. The capacity of this approach to recover true effects, in comparison to summated regression, is demonstrated in a Monte Carlo study that creates a simulated data set in which the underlying true effects are known. Analysis of a second, empirical data set is included to demonstrate the technique's use within IS theory. In this second analysis, substantial direct and interaction effects of enjoyment on electronic-mail adoption are shown to exist.
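The attenuation mechanism described in the abstract can be made concrete with a small simulation. The following is a minimal sketch, not the paper's actual Monte Carlo design: it regresses an outcome on two predictors and their product, once using error-free latent scores and once using summated scales built from noisy indicators, and shows that measurement error shrinks the estimated interaction coefficient. All names and parameter values (sample size, reliability, effect sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 200                         # sample size and replications (illustrative)
beta_x, beta_z, beta_xz = 0.3, 0.3, 0.3    # true structural effects (assumed)
reliability = 0.7                          # per-indicator reliability (assumed)

def summated(latent, k=4):
    """Average k noisy indicators of a latent score, i.e., a summated scale."""
    err_sd = np.sqrt((1 - reliability) / reliability)
    return np.mean([latent + rng.normal(0, err_sd, latent.size) for _ in range(k)], axis=0)

def interaction_coef(xs, zs, y):
    """OLS estimate of the coefficient on the xs*zs product term."""
    X = np.column_stack([np.ones(len(y)), xs, zs, xs * zs])
    return np.linalg.lstsq(X, y, rcond=None)[0][3]

est_true, est_obs = [], []
for _ in range(reps):
    x, z = rng.normal(size=n), rng.normal(size=n)
    y = beta_x * x + beta_z * z + beta_xz * x * z + rng.normal(size=n)
    est_true.append(interaction_coef(x, z, y))                     # error-free scores
    est_obs.append(interaction_coef(summated(x), summated(z), y))  # noisy scales

print(f"interaction estimate from true scores:     {np.mean(est_true):.3f}")
print(f"interaction estimate from summated scales: {np.mean(est_obs):.3f} (attenuated)")
```

Averaging more indicators raises composite reliability, so the bias shrinks as k grows but does not disappear; the latent variable approach described in the abstract corrects for the attenuation explicitly by modeling the measurement error.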
Assessing User Competence: Conceptualization and Measurement. (Information Systems Research, 2000)
Authors: Marcolin, Barbara L.; Compeau, Deborah R.; Munro, Malcolm C.; Huff, Sid L.
Abstract:
    Organizations today face great pressure to maximize the benefits from their investments in information technology (IT). They are challenged not just to use IT, but to use it as effectively as possible. Understanding how to assess the competence of users is critical to maximizing the effectiveness of IT use. Yet the user competence construct is largely absent from prominent technology acceptance and fit models, poorly conceptualized, and inconsistently measured. We begin by presenting a conceptual model of the assessment of user competence to organize and clarify the diverse literature regarding what user competence means and the problems of assessment. As an illustrative study, we then report the findings from an experiment involving 66 participants. The experiment empirically compared two measurement methods (paper-and-pencil tests versus self-report questionnaires), across two types of software, or domains of knowledge (word processing versus spreadsheet packages), and two conceptualizations of competence (software knowledge versus self-efficacy). The analysis shows that all three main effects are statistically significant: how user competence is measured, what is measured, and what measurement context is employed all influence the measurement outcome. Furthermore, significant interaction effects indicate that different combinations of measurement methods, conceptualizations, and knowledge domains produce different results. The concept of frame of reference, and its anchoring effect on subjects' responses, explains a number of these findings. The study demonstrates the need for clarity both in defining what type of competence is being assessed and in drawing conclusions regarding competence based upon the types of measures used. Since the results suggest that the definition and measurement of the user competence construct can change the ability score being captured, existing information systems (IS) models of usage must incorporate the concept of an ability rating. We conclude by discussing how user competence can be incorporated into the Task-Technology Fit model, as well as additional theoretical and practical implications of our research.
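To make the factorial design above concrete, here is a hedged sketch of how a 2 (measurement method) x 2 (knowledge domain) x 2 (conceptualization) between-subjects experiment could be analyzed for main and interaction effects. The data, cell means, and cell sizes below are synthetic and purely illustrative; this is not the study's data or analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
levels = {"method":  ["paper_pencil", "self_report"],
          "domain":  ["word_processing", "spreadsheet"],
          "concept": ["knowledge", "self_efficacy"]}

# Fully crossed 2x2x2 design; cell means and noise are made up for illustration.
rows = []
for m in levels["method"]:
    for d in levels["domain"]:
        for c in levels["concept"]:
            base = (50 + 5 * (m == "self_report") + 3 * (d == "spreadsheet")
                    + 4 * (c == "self_efficacy")
                    + 6 * ((m == "self_report") and (c == "self_efficacy")))  # one interaction
            for _ in range(8):  # 8 per cell (64 total); the study itself had 66 participants
                rows.append({"method": m, "domain": d, "concept": c,
                             "score": base + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Three main effects plus all two- and three-way interactions, as in the reported analysis.
model = ols("score ~ C(method) * C(domain) * C(concept)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The ANOVA table lists an F test for each main effect and interaction; in the study's terms, a significant method-by-conceptualization row would correspond to the finding that different combinations of measurement method and conceptualization produce different competence scores.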